Krippendorff's Alpha
Krippendorff's alpha coefficient, named after academic Klaus Krippendorff, is a statistical measure of the agreement achieved when coding a set of units of analysis. Since the 1970s, ''alpha'' has been used in content analysis, where textual units are categorized by trained readers; in counseling and survey research, where experts code open-ended interview data into analyzable terms; in psychological testing, where alternative tests of the same phenomena need to be compared; and in observational studies, where unstructured happenings are recorded for subsequent analysis. Krippendorff's alpha generalizes several known statistics, often called measures of inter-coder agreement, inter-rater reliability, or reliability of coding given sets of units (as distinct from unitizing), but it also distinguishes itself from statistics that are called reliability coefficients yet are unsuitable to the particulars of coding data generated for subsequent analysis.

Krippendorff's alpha is applicable to any number of coders, each assigning one value to one unit of analysis, to incomplete (missing) data, to any number of values available for coding a variable, to binary, nominal, ordinal, interval, ratio, polar, and circular metrics (these are not metrics in the mathematical sense, but often squares of mathematical metrics; see levels of measurement), and it adjusts itself to small sample sizes of the reliability data. The virtue of a single coefficient with these variations is that computed reliabilities are comparable across any numbers of coders, values, different metrics, and unequal sample sizes. Software for calculating Krippendorff's alpha is available.


Reliability data

Reliability data are generated in a situation in which ''m'' ≥ 2 jointly instructed (e.g., by a code book) but independently working coders assign any one of a set of values 1,...,''V'' to a common set of ''N'' units of analysis. In their canonical form, reliability data are tabulated in an ''m''-by-''N'' matrix containing the values ''vij'' that coder ''ci'' has assigned to unit ''uj''. Define ''mj'' as the number of values assigned to unit ''j'' across all coders ''c''. When data are incomplete, ''mj'' may be less than ''m''. Reliability data require that values be pairable, i.e., ''mj'' ≥ 2. The total number of pairable values is \sum_{j=1}^N m_j = ''n'' ≤ ''mN''.
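
To make the canonical form concrete, here is a minimal Python sketch (assuming NumPy; the coder-by-unit values are hypothetical and chosen only to illustrate the bookkeeping of ''mj'' and ''n''):

 import numpy as np
 # Canonical form: m coders (rows) by N units (columns); np.nan marks a missing value.
 # The numbers below are hypothetical, for illustration only.
 reliability_data = np.array([
     [1, 2, 3, 3, 2, 1, 4, 1, 2, np.nan, np.nan, np.nan],
     [1, 2, 3, 3, 2, 2, 4, 1, 2, 5,      np.nan, 3],
     [np.nan, 3, 3, 3, 2, 3, 4, 2, 2, 5, 1,      np.nan],
 ])                                                    # m = 3 coders, N = 12 units
 m_j = np.sum(~np.isnan(reliability_data), axis=0)     # number of values per unit
 pairable = m_j >= 2                                   # units with at least two values
 n = int(m_j[pairable].sum())                          # total number of pairable values
 print(m_j, n)

Units coded by fewer than two coders simply drop out of the reliability calculation, which is how incomplete data are handled.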


General form of alpha

We denote by R the set of all possible responses an observer can give. The responses of all observers for a given example together form a unit (a multiset of responses). We denote the multiset of these units by U. Alpha is given by:

:\alpha = 1 - \frac{D_o}{D_e}

where D_o is the disagreement observed and D_e is the disagreement expected by chance.

:D_o = \frac{1}{n} \sum_{c \in R} \sum_{k \in R} \delta(c, k) \sum_{u \in U} m_u \frac{n_{cku}}{P(m_u, 2)}

where \delta is a metric function (note that this is not a metric in the mathematical sense, but often the square of a mathematical metric; see below), n is the total number of pairable elements, m_u is the number of items in unit u, n_{cku} is the number of (c, k) pairs in unit u, and P is the permutation function P(i, 2) = i(i - 1). Rearranging terms, the sum can be interpreted in a conceptual way as the weighted average of the disagreements of the individual units, weighted by the number of coders assigned to unit ''j'':

:D_o = \frac{1}{n} \sum_{j=1}^N m_j \, \mathbb{E}(\delta_j)

where \mathbb{E}(\delta_j) is the mean of the \binom{m_j}{2} numbers \delta(v_{ij}, v_{i'j}) (here i > i', and both values are pairable within unit j). Note that in the case m_j = m for all j, D_o is just the average of all the numbers \delta(v_{ij}, v_{i'j}) with i > i'. There is also an interpretation of D_o as the (weighted) average observed distance from the diagonal.

:D_e = \frac{1}{P(n, 2)} \sum_{c \in R} \sum_{k \in R} \delta(c, k)\, P_{ck}

where P_{ck} is the number of ways the pair (c, k) can be made. This can be seen to be the average distance from the diagonal of all possible pairs of responses that could be derived from the multiset of all observations.

:P_{ck} = \begin{cases} n_c n_k & \text{if } c \ne k \\ n_c (n_c - 1) & \text{if } c = k \end{cases}

The above is equivalent to the usual form of \alpha once it has been simplified algebraically. One interpretation of Krippendorff's ''alpha'' is:

:\alpha = 1 - \frac{D_o \text{ (observed disagreement)}}{D_e \text{ (expected disagreement)}}

:\alpha = 1 indicates perfect reliability.
:\alpha = 0 indicates the complete absence of reliability; units and the values assigned to them are statistically unrelated.
:\alpha < 0 when disagreements are systematic and exceed what can be expected by chance.

In this general form, the disagreements ''Do'' and ''De'' may be conceptually transparent but are computationally inefficient. They can be simplified algebraically, especially when expressed in terms of the visually more instructive coincidence matrix representation of the reliability data.
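
The general form can be sketched directly from these definitions (a Python sketch; the function name alpha_general and the nominal difference function below are illustrative assumptions, not established software):

 from collections import Counter
 from itertools import permutations
 def alpha_general(units, delta):
     """units: list of multisets (lists) of responses, one per unit; delta(c, k): difference function."""
     units = [u for u in units if len(u) >= 2]         # only pairable units enter
     n = sum(len(u) for u in units)                    # total number of pairable values
     # Observed disagreement: all ordered pairs within a unit, each weighted by 1/(m_u - 1).
     D_o = sum(delta(c, k) / (len(u) - 1)
               for u in units for c, k in permutations(u, 2)) / n
     # Expected disagreement: all ordered pairs drawable from the pooled values,
     # never pairing a value with itself.
     n_v = Counter(v for u in units for v in u)
     D_e = sum(delta(c, k) * (n_v[c] * (n_v[k] - 1) if c == k else n_v[c] * n_v[k])
               for c in n_v for k in n_v) / (n * (n - 1))
     return 1 - D_o / D_e
 def nominal(c, k):
     return 0.0 if c == k else 1.0                     # nominal difference function

For example, alpha_general([[1, 1], [1, 2], [2, 2]], nominal) evaluates the two-coder nominal case on three small units.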


Coincidence matrices

A coincidence matrix cross tabulates the ''n'' pairable values from the canonical form of the reliability data into a ''V''-by-''V'' square matrix, where ''V'' is the number of values available in a variable. Unlike contingency matrices, familiar in association and correlation statistics, which tabulate ''pairs'' of values (cross tabulation), a coincidence matrix tabulates all pairable ''values''. A coincidence matrix omits references to coders and is symmetrical around its diagonal, which contains all perfect matches, ''viu'' = ''vi'u'' for two coders ''i'' and ''i' '', across all units ''u''. The matrix of observed coincidences contains frequencies:

:o_{vv'} = \sum_{u=1}^N \frac{\text{number of } (v, v') \text{ pairs in unit } u}{m_u - 1} = o_{v'v},
:n_v = \sum_{v'=1}^V o_{vv'} = \sum_{i, u} I(v_{iu} = v), \qquad n = \sum_{v=1}^V n_v,

omitting unpaired values, where ''I''(∘) = 1 if ''∘'' is true, and 0 otherwise. Because a coincidence matrix tabulates all pairable values and its contents sum to the total ''n'', when three or more coders are involved, ''ovv' '' may be fractions. The matrix of expected coincidences contains frequencies:

:e_{vv'} = \frac{P_{vv'}}{n - 1} = \frac{1}{n - 1} \cdot \begin{cases} n_v (n_v - 1) & \text{if } v = v' \\ n_v n_{v'} & \text{if } v \ne v' \end{cases} = e_{v'v},

which sum to the same ''nv'', ''nv' '', and ''n'' as does ''ovv' ''. In terms of these coincidences, Krippendorff's ''alpha'' becomes:

:\alpha = 1 - \frac{\sum_v \sum_{v'} o_{vv'}\, \delta(v, v')}{\sum_v \sum_{v'} e_{vv'}\, \delta(v, v')} = 1 - (n - 1) \frac{\sum_v \sum_{v' > v} o_{vv'}\, \delta(v, v')}{\sum_v \sum_{v' > v} n_v n_{v'}\, \delta(v, v')}.
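
A sketch of this coincidence-matrix route in Python (the helper names are mine; here delta is a ''V''-by-''V'' matrix of difference values rather than a function):

 import numpy as np
 def coincidence_matrix(units, values):
     """Observed coincidences o[v, v'] from a list of units (lists of coded values)."""
     index = {v: i for i, v in enumerate(values)}
     o = np.zeros((len(values), len(values)))
     for u in units:
         if len(u) < 2:
             continue                                  # unpairable values are omitted
         for a in range(len(u)):
             for b in range(len(u)):
                 if a != b:                            # every ordered pair within the unit
                     o[index[u[a]], index[u[b]]] += 1.0 / (len(u) - 1)
     return o
 def alpha_from_coincidences(o, delta):
     n_v = o.sum(axis=1)                               # value frequencies n_v
     n = n_v.sum()                                     # total number of pairable values
     e = np.outer(n_v, n_v)                            # expected coincidences n_v * n_v' ...
     np.fill_diagonal(e, n_v * (n_v - 1))              # ... with n_v(n_v - 1) on the diagonal
     e = e / (n - 1)
     return 1 - (o * delta).sum() / (e * delta).sum()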


Difference functions

Difference functions \delta(v,v') between values ''v'' and ''v' '' reflect the metric properties (levels of measurement) of their variable. In general:

:\delta(v, v') \ge 0, \qquad \delta(v, v) = 0, \qquad \delta(v, v') = \delta(v', v)

In particular:

:For nominal data \delta_\text{nominal}(v, v') = \begin{cases} 0 & \text{if } v = v' \\ 1 & \text{if } v \ne v' \end{cases}, where ''v'' and ''v' '' serve as names.
:For ordinal data \delta_\text{ordinal}(v, v') = \left( \sum_{g=v}^{v'} n_g - \frac{n_v + n_{v'}}{2} \right)^2, where ''v'' and ''v''′ are ranks.
:For interval data \delta_\text{interval}(v, v') = (v - v')^2, where ''v'' and ''v''′ are interval scale values.
:For ratio data \delta_\text{ratio}(v, v') = \left( \frac{v - v'}{v + v'} \right)^2, where ''v'' and ''v''′ are absolute values.
:For polar data \delta_\text{polar}(v, v') = \frac{(v - v')^2}{(v + v' - 2v_\text{min})(2v_\text{max} - v - v')}, where ''v''min and ''v''max define the end points of the polar scale.
:For circular data \delta_\text{circular}(v, v') = \left( \sin \left[ 180 \frac{v - v'}{U} \right] \right)^2, where the sine function is expressed in degrees and ''U'' is the circumference or the range of values in a circle or loop before they repeat. For equal-interval circular metrics, the smallest and largest integer values of this metric are adjacent to each other and ''U'' = ''v''largest – ''v''smallest + 1.
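
These difference functions can be written as ''V''-by-''V'' matrices to plug into the coincidence-based computation sketched above (a Python sketch; the ordinal form takes the value frequencies ''nv'' in rank order):

 import numpy as np
 def nominal_delta(values):
     return 1.0 - np.eye(len(values))                          # 0 if v = v', 1 otherwise
 def interval_delta(values):
     v = np.asarray(values, dtype=float)
     return (v[:, None] - v[None, :]) ** 2                     # (v - v')^2
 def ratio_delta(values):
     v = np.asarray(values, dtype=float)
     with np.errstate(invalid="ignore", divide="ignore"):
         d = ((v[:, None] - v[None, :]) / (v[:, None] + v[None, :])) ** 2
     return np.nan_to_num(d)                                   # ((v - v') / (v + v'))^2
 def ordinal_delta(n_v):
     # delta(v, v') = (sum of the frequencies of the ranks from v to v',
     #                 minus half the frequencies of the two end ranks) squared
     n_v = np.asarray(n_v, dtype=float)
     c = np.concatenate([[0.0], np.cumsum(n_v)])               # cumulative frequencies
     V = len(n_v)
     d = np.zeros((V, V))
     for i in range(V):
         for j in range(V):
             lo, hi = min(i, j), max(i, j)
             d[i, j] = (c[hi + 1] - c[lo] - (n_v[lo] + n_v[hi]) / 2.0) ** 2
     return d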


Significance

Inasmuch as mathematical statements of the statistical distribution of ''alpha'' are always only approximations, it is preferable to obtain ''alpha's'' distribution by bootstrapping. ''Alpha's'' distribution gives rise to two indices:
*The confidence intervals of a computed ''alpha'' at various levels of statistical significance
*The probability that ''alpha'' fails to achieve a chosen minimum required for data to be considered sufficiently reliable (one-tailed test). This index acknowledges that the null hypothesis (of chance agreement) is so far removed from the range of relevant ''alpha'' coefficients that its rejection would mean little regarding how reliable given data are. To be judged reliable, data must not significantly deviate from perfect agreement.
The minimum acceptable ''alpha'' coefficient should be chosen according to the importance of the conclusions to be drawn from imperfect data. When the costs of mistaken conclusions are high, the minimum ''alpha'' needs to be set high as well. In the absence of knowledge of the risks of drawing false conclusions from unreliable data, social scientists commonly rely on data with reliabilities ''α'' ≥ 0.800, consider data with 0.800 > ''α'' ≥ 0.667 only to draw tentative conclusions, and discard data whose agreement measures ''α'' < 0.667.
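
A hedged sketch of such a bootstrap (Python; it resamples whole units with replacement and reuses the hypothetical alpha_general helper sketched earlier, so it is a simplification rather than Krippendorff's own bootstrap algorithm):

 import random
 def bootstrap_alpha(units, delta, alpha_min=0.667, trials=10000, seed=0):
     """Approximate alpha's sampling distribution by resampling units with replacement."""
     rng = random.Random(seed)
     samples = []
     for _ in range(trials):
         resample = [rng.choice(units) for _ in units]
         # (degenerate resamples with no expected disagreement are not handled here)
         samples.append(alpha_general(resample, delta))
     samples.sort()
     ci_95 = (samples[int(0.025 * trials)], samples[int(0.975 * trials)])
     p_below_min = sum(a < alpha_min for a in samples) / trials   # one-tailed failure probability
     return ci_95, p_below_min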


A computational example

Let the canonical form of the reliability data be a 3-coder-by-15-unit matrix with 45 cells, in which “*” indicates a default category like “cannot code,” “no answer,” or “lacking an observation.” Then “*” provides no information about the reliability of data in the four values that matter. Note that units 2 and 14 contain no information and unit 1 contains only one value, which is not pairable within that unit. Thus, these reliability data consist not of ''mN'' = 45 but of ''n'' = 26 pairable values, located not in ''N'' = 15 but in 12 multiply coded units. The coincidence matrix for these data is constructed by summing, for each pair of values, the number of such pairs within each unit divided by ''mu'' – 1, yielding:

:o_{11} = 6, \quad o_{13} = 1 = o_{31}, \quad o_{22} = 4, \quad o_{33} = 7, \quad o_{34} = 2 = o_{43}, \quad o_{44} = 3,

with all other coincidences zero, so that the value totals are n_1 = 7, n_2 = 4, n_3 = 10, n_4 = 5, and n = 26. In terms of the entries in this coincidence matrix, Krippendorff's ''alpha'' may be calculated from:

:\alpha_\text{metric} = 1 - \frac{D_o}{D_e} = 1 - (n - 1) \frac{\sum_v \sum_{v' > v} o_{vv'}\, {}_\text{metric}\delta(v, v')}{\sum_v \sum_{v' > v} n_v n_{v'}\, {}_\text{metric}\delta(v, v')}.

For convenience, because products with \delta(v,v) = 0 and \delta(v,v') = \delta(v',v), only the entries in one of the off-diagonal triangles of the coincidence matrix are listed in the following:

:\alpha_\text{metric} = 1 - (n - 1) \frac{o_{13}\, \delta(1,3) + o_{34}\, \delta(3,4)}{n_1 n_2\, \delta(1,2) + n_1 n_3\, \delta(1,3) + n_1 n_4\, \delta(1,4) + n_2 n_3\, \delta(2,3) + n_2 n_4\, \delta(2,4) + n_3 n_4\, \delta(3,4)}

Considering that all \delta_\text{nominal}(v,v') = 1 when v \ne v', for nominal data the above expression yields:

:\alpha_\text{nominal} = 1 - 25 \cdot \frac{1 + 2}{28 + 70 + 35 + 40 + 20 + 50} = 1 - \frac{75}{243} = 0.691

With \delta_\text{interval}(1,2) = \delta_\text{interval}(2,3) = \delta_\text{interval}(3,4) = 1^2, \quad \delta_\text{interval}(1,3) = \delta_\text{interval}(2,4) = 2^2, \text{ and } \delta_\text{interval}(1,4) = 3^2, for interval data the above expression yields:

:\alpha_\text{interval} = 1 - 25 \cdot \frac{1 \cdot 2^2 + 2 \cdot 1^2}{28 \cdot 1^2 + 70 \cdot 2^2 + 35 \cdot 3^2 + 40 \cdot 1^2 + 20 \cdot 2^2 + 50 \cdot 1^2} = 1 - \frac{150}{793} = 0.811

Here, \alpha_\text{interval} > \alpha_\text{nominal} because disagreements happen to occur largely among neighboring values, visualized by occurring closer to the diagonal of the coincidence matrix, a condition that \alpha_\text{interval} takes into account but \alpha_\text{nominal} does not. When the observed frequencies ''ov'' ≠ ''v''′ are on average proportional to the expected frequencies ''ev'' ≠ ''v''′, \alpha_\text{interval} = \alpha_\text{nominal}. Comparing ''alpha'' coefficients across different metrics can provide clues to how coders conceptualize the metric of a variable.
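
These two results can be reproduced from the coincidence matrix alone, using the alpha_from_coincidences, nominal_delta, and interval_delta helpers sketched in the earlier sections (the array below simply restates the observed coincidences given above):

 import numpy as np
 # Observed coincidence matrix for values 1..4 (n = 26), as tallied above.
 o = np.array([
     [6.0, 0.0, 1.0, 0.0],
     [0.0, 4.0, 0.0, 0.0],
     [1.0, 0.0, 7.0, 2.0],
     [0.0, 0.0, 2.0, 3.0],
 ])
 values = [1, 2, 3, 4]
 print(round(alpha_from_coincidences(o, nominal_delta(values)), 3))    # 0.691
 print(round(alpha_from_coincidences(o, interval_delta(values)), 3))   # 0.811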


Alpha's embrace of other statistics

Krippendorff's ''alpha'' brings several known statistics under a common umbrella, each of which has its own limitations but no additional virtues.

*Scott's ''pi'' is an agreement coefficient for nominal data and two coders.

:\pi = \frac{P_o - P_e}{1 - P_e}, \text{ where } P_o = \sum_c \frac{o_{cc}}{n} \text{ and } P_e = \sum_c \left( \frac{n_c}{n} \right)^2.

When data are nominal, ''alpha'' reduces to a form resembling Scott's ''pi'':

:{}_\text{nominal}\alpha = 1 - \frac{D_o}{D_e} = \frac{P_o - \sum_c \frac{n_c (n_c - 1)}{n(n - 1)}}{1 - \sum_c \frac{n_c (n_c - 1)}{n(n - 1)}}

Scott's observed proportion of agreement P_o appears in ''alpha's'' numerator, exactly. Scott's expected proportion of agreement, P_e = \sum_c \left( \frac{n_c}{n} \right)^2, is asymptotically approximated by \sum_c \frac{n_c (n_c - 1)}{n(n - 1)} when the sample size ''n'' is large, and equal to it when infinite. It follows that Scott's ''pi'' is that special case of ''alpha'' in which two coders generate a very large sample of nominal data. For finite sample sizes:

:{}_\text{nominal}\alpha = 1 - \frac{n - 1}{n}(1 - \pi) \ge \pi.

Evidently, \lim_{n \to \infty} {}_\text{nominal}\alpha = \pi (see the numerical sketch after this list).

*Fleiss' ''kappa'' is an agreement coefficient for nominal data with very large sample sizes where a set of coders has assigned exactly ''m'' labels to all of ''N'' units without exception (but note, there may be more than ''m'' coders, and only some subset labels each instance). Fleiss claimed to have extended Cohen's ''kappa'' to three or more raters or coders, but generalized Scott's ''pi'' instead. This confusion is reflected in Fleiss' choice of its name, which has been recognized by renaming it ''K'':

:K = \frac{\bar P - \bar P_e}{1 - \bar P_e}, \text{ where } \bar P = \frac{1}{N} \sum_{u=1}^N \sum_c \frac{n_{cu}(n_{cu} - 1)}{m(m - 1)} = \sum_c \frac{o_{cc}}{n}, \text{ and } \bar P_e = \sum_c \left( \frac{n_c}{n} \right)^2

When sample sizes are finite, ''K'' can be seen to perpetrate the inconsistency of obtaining the proportion of observed agreements \bar P by counting matches within the ''m''(''m'' − 1) possible pairs of values within ''u'', properly ''excluding'' values paired with themselves, while the proportion \bar P_e is obtained by counting matches within all (''mN'')² = ''n''² possible pairs of values, effectively ''including'' values paired with themselves. It is the latter that introduces a bias into the coefficient. However, just as for ''pi'', when sample sizes become very large this bias disappears and the proportion \sum_c \frac{n_c (n_c - 1)}{n(n - 1)} in {}_\text{nominal}\alpha above asymptotically approximates \bar P_e in ''K''. Nevertheless, Fleiss' ''kappa'', or rather ''K'', intersects with ''alpha'' in that special situation in which a fixed number of ''m'' coders code all of ''N'' units (no data are missing), using nominal categories, and the sample size ''n'' = ''mN'' is very large, theoretically infinite.

*Spearman's rank correlation coefficient ''rho'' measures the agreement between two coders' rankings of the same set of ''N'' objects. In its original form:

:\rho = 1 - \frac{6 \sum D^2}{N(N^2 - 1)}, \text{ where } \sum D^2 = \sum_{u=1}^N (c_u - k_u)^2

is the sum of the ''N'' squared differences between one coder's rank ''cu'' and the other coder's rank ''ku'' of the same object ''u''. Whereas ''alpha'' accounts for tied ranks in terms of their frequencies for all coders, ''rho'' averages them in each individual coder's instance. In the absence of ties, \rho's numerator \sum D^2 = N D_o and \rho's denominator \frac{N(N^2 - 1)}{6} = \frac{n - 1}{n} N D_e, where ''n'' = 2''N'', which becomes N D_e when sample sizes become large. So, Spearman's ''rho'' is that special case of ''alpha'' in which two coders rank a very large set of units. Again, {}_\text{interval}\alpha \ge \rho and \lim_{N \to \infty} {}_\text{interval}\alpha = \rho.

*Pearson's ''intraclass correlation'' coefficient ''rii'' is an agreement coefficient for interval data, two coders, and very large sample sizes. To obtain it, Pearson's original suggestion was to enter the observed pairs of values twice into a table, once as the pair (''c'', ''k'') and once as (''k'', ''c''), to which the traditional Pearson product-moment correlation coefficient is then applied. By entering pairs of values twice, the resulting table becomes a coincidence matrix without reference to the two coders, contains ''n'' = 2''N'' values, and is symmetrical around the diagonal, i.e., the joint linear regression line is forced into a 45° line, and references to coders are eliminated. Hence, Pearson's ''intraclass correlation'' coefficient is that special case of interval ''alpha'' for two coders and large sample sizes, with {}_\text{interval}\alpha \ge r_{ii} and \lim_{N \to \infty} {}_\text{interval}\alpha = r_{ii}.

*Finally, the disagreements in the interval ''alpha'' (''Du'', ''Do'', and ''De'') are proper sample variances. It follows that the reliability the interval ''alpha'' assesses is consistent with all variance-based analytical techniques, such as the analysis of variance. Moreover, by incorporating difference functions not just for interval data but also for nominal, ordinal, ratio, polar, and circular data, ''alpha'' extends the notion of variance to metrics that classical analytical techniques rarely address.

Krippendorff's ''alpha'' is more general than any of these special-purpose coefficients. It adjusts to varying sample sizes and affords comparisons across a wide variety of reliability data, mostly ignored by the familiar measures.
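
The finite-sample relation between nominal ''alpha'' and Scott's ''pi'' can be checked numerically (a Python sketch for two coders; scotts_pi is my own helper, alpha_general and nominal are the hypothetical helpers sketched earlier, and the data are made up):

 from collections import Counter
 def scotts_pi(pairs):
     """Scott's pi for two coders; pairs is a list of (value_coder_1, value_coder_2)."""
     N = len(pairs)
     p_o = sum(c == k for c, k in pairs) / N                   # observed agreement
     pooled = Counter(v for pair in pairs for v in pair)       # pooled marginals
     p_e = sum((f / (2 * N)) ** 2 for f in pooled.values())    # expected agreement
     return (p_o - p_e) / (1 - p_e)
 pairs = [(1, 1), (1, 2), (2, 2), (2, 2), (3, 3), (3, 1), (1, 1), (2, 2)]
 pi = scotts_pi(pairs)
 a = alpha_general([list(p) for p in pairs], nominal)
 n = 2 * len(pairs)
 print(round(a, 4), round(1 - (n - 1) / n * (1 - pi), 4))      # the two numbers coincide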


Coefficients incompatible with alpha and the reliability of coding

Semantically, reliability is the ability to rely on something, here on coded data for subsequent analysis. When a sufficiently large number of coders agree perfectly on what they have read or observed, relying on their descriptions is a safe bet. Judgments of this kind hinge on the number of coders duplicating the process and how representative the coded units are of the population of interest. Problems of interpretation arise when agreement is less than perfect, especially when reliability is absent.

*Correlation and association coefficients. Pearson's product-moment correlation coefficient ''rij'', for example, measures deviations from any linear regression line between the coordinates of ''i'' and ''j''. Unless that regression line happens to be exactly 45° or centered, ''rij'' does not measure agreement. Similarly, while perfect agreement between coders also means perfect association, association statistics register any above-chance pattern of relationships between variables. They do not distinguish agreement from other associations and are, hence, unsuitable as reliability measures.

*Coefficients measuring the degree to which coders are statistically dependent on each other. When the reliability of coded data is at issue, the individuality of coders can have no place in it. Coders need to be treated as interchangeable. ''Alpha'', Scott's ''pi'', and Pearson's original ''intraclass correlation'' accomplish this by being definable as a function of coincidences, not only of contingencies. Unlike the more familiar contingency matrices, which tabulate ''N'' ''pairs'' of values and maintain reference to the two coders, coincidence matrices tabulate the ''n'' pairable ''values'' used in coding, regardless of who contributed them, in effect treating coders as interchangeable. Cohen's ''kappa'', by contrast, defines expected agreement in terms of contingencies, as the agreement that would be expected if coders were statistically independent of each other. Cohen's conception of chance fails to include disagreements between coders' individual predilections for particular categories, punishes coders who agree on their use of categories, and rewards those who do not agree with higher ''kappa'' values. This is the cause of other noted oddities of ''kappa''. The statistical independence of coders is only marginally related to the statistical independence of the units coded and the values assigned to them. Cohen's ''kappa'', by ignoring crucial disagreements, can become deceptively large when the reliability of coding data is to be assessed (see the numerical sketch after this list).

*Coefficients measuring the consistency of coder judgments. In the psychometric literature, reliability tends to be defined as the consistency with which several tests perform when applied to a common set of individual characteristics. Cronbach's alpha, for example, is designed to assess the degree to which multiple tests produce correlated results. Perfect agreement is the ideal, of course, but Cronbach's alpha is high also when test results vary systematically. Consistency of coders' judgments does not provide the needed assurances of data reliability. Any deviation from identical judgments, systematic or random, needs to count as disagreement and reduce the measured reliability. Cronbach's alpha is not designed to respond to absolute differences.

*Coefficients with baselines (conditions under which they measure 0) that cannot be interpreted in terms of reliability, i.e., that have no dedicated value to indicate when the units and the values assigned to them are statistically unrelated. Simple %-agreement ranges from 0 = extreme disagreement to 100 = perfect agreement, with chance having no definite value. As already noted, Cohen's ''kappa'' falls into this category by defining the absence of reliability as the statistical independence between two individual coders. The baseline of Bennett, Alpert, and Goldstein's ''S'' is defined in terms of the number of values available for coding, which has little to do with how values are actually used. Goodman and Kruskal's ''lambdar'' is defined to vary between –1 and +1, leaving 0 without a particular reliability interpretation. Lin's reproducibility or concordance coefficient ''rc'' takes Pearson's ''product-moment correlation'' ''rij'' as a measure of precision and adds to it a measure ''Cb'' of accuracy, ostensibly to correct for ''rij'''s above-mentioned inadequacy. It varies between –1 and +1, and the reliability interpretation of 0 is uncertain.

There are more so-called reliability measures whose reliability interpretations become questionable as soon as they deviate from perfect agreement. Naming a statistic as one of agreement, reproducibility, or reliability does not make it a valid index of whether one can rely on coded data in subsequent decisions. Its mathematical structure must fit the process of coding units into a system of analyzable terms.
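
As a concrete illustration of the contrast between contingency-based and coincidence-based expectations for two coders and nominal data (a Python sketch; the helper name and the toy data are mine):

 from collections import Counter
 def expected_agreements(pairs):
     """Chance agreement under Cohen's kappa (each coder's own marginals)
     versus Scott's pi / alpha (pooled marginals, coders interchangeable)."""
     N = len(pairs)
     m1 = Counter(c for c, _ in pairs)                     # coder 1's marginals
     m2 = Counter(k for _, k in pairs)                     # coder 2's marginals
     cohen = sum((m1[v] / N) * (m2[v] / N) for v in set(m1) | set(m2))
     pooled = Counter(v for pair in pairs for v in pair)
     scott = sum((f / (2 * N)) ** 2 for f in pooled.values())
     return cohen, scott
 # Two coders with opposite preferences for the categories "a" and "b":
 pairs = [("a", "b")] * 6 + [("a", "a"), ("b", "b")]
 print(expected_agreements(pairs))   # (0.21875, 0.5): Cohen's expectation sits below the
                                     # observed agreement of 0.25, so kappa comes out positive,
                                     # while the pooled expectation of 0.5 makes pi / alpha negative.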


Notes

* K. Krippendorff, 2013, ''Content Analysis: An Introduction to Its Methodology'', 3rd ed. Thousand Oaks, CA, USA: Sage, pp. 221–250.


References

*Bennett, Edward M., Alpert, R. & Goldstein, A. C. (1954). Communications through limited response questioning. ''Public Opinion Quarterly, 18'', 303–308.
*Brennan, Robert L. & Prediger, Dale J. (1981). Coefficient kappa: Some uses, misuses, and alternatives. ''Educational and Psychological Measurement, 41'', 687–699.
*Cohen, Jacob (1960). A coefficient of agreement for nominal scales. ''Educational and Psychological Measurement, 20'' (1), 37–46.
*Cronbach, Lee J. (1951). Coefficient alpha and the internal structure of tests. ''Psychometrika, 16'' (3), 297–334.
*Fleiss, Joseph L. (1971). Measuring nominal scale agreement among many raters. ''Psychological Bulletin, 76'', 378–382.
*Goodman, Leo A. & Kruskal, William H. (1954). Measures of association for cross classifications. ''Journal of the American Statistical Association, 49'', 732–764.
*Hayes, Andrew F. & Krippendorff, Klaus (2007). Answering the call for a standard reliability measure for coding data. ''Communication Methods and Measures, 1'', 77–89.
*Krippendorff, Klaus (1970). Estimating the reliability, systematic error, and random error of interval data. ''Educational and Psychological Measurement, 30'' (1), 61–70.
*Krippendorff, Klaus (1978). Reliability of binary attribute data. ''Biometrics, 34'' (1), 142–144.
*Krippendorff, Klaus (2013). ''Content analysis: An introduction to its methodology, 3rd edition''. Thousand Oaks, CA: Sage.
*Lin, Lawrence I. (1989). A concordance correlation coefficient to evaluate reproducibility. ''Biometrics, 45'', 255–268.
*Nunnally, Jum C. & Bernstein, Ira H. (1994). ''Psychometric Theory, 3rd ed''. New York: McGraw-Hill.
*Pearson, Karl, et al. (1901). Mathematical contributions to the theory of evolution. IX: On the principle of homotyposis and its relation to heredity, to variability of the individual, and to that of race. Part I: Homotyposis in the vegetable kingdom. ''Philosophical Transactions of the Royal Society (London), Series A, 197'', 285–379.
*Scott, William A. (1955). Reliability of content analysis: The case of nominal scale coding. ''Public Opinion Quarterly, 19'', 321–325.
*Siegel, Sidney & Castellan, N. John (1988). ''Nonparametric Statistics for the Behavioral Sciences, 2nd ed''. Boston: McGraw-Hill.
*Spearman, Charles E. (1904). The proof and measurement of association between two things. ''American Journal of Psychology, 15'', 72–101.
*Tildesley, M. L. (1921). A first study of the Burmese skull. ''Biometrika, 13'', 176–267.
*Zwick, Rebecca (1988). Another look at interrater agreement. ''Psychological Bulletin, 103'' (3), 347–387.


External links


*YouTube video about Krippendorff's alpha, using SPSS and a macro.
*Reliability Calculator: calculates Krippendorff's alpha.
*Krippendorff Alpha JavaScript implementation and library.
*Python implementation.
*Krippendorff Alpha Ruby Gem implementation and library.
*Simpledorff: a Python implementation that works with DataFrames.